Machine learning models are prone to making incorrect predictions on inputs that are far from the training distribution. This hinders their deployment in safety-critical applications such as autonomous vehicles and healthcare. Detecting a shift of individual data points from the training distribution has gained attention, and a number of techniques have been proposed for such out-of-distribution (OOD) detection. But in many applications, the inputs to a machine learning model form a temporal sequence. Existing techniques for OOD detection in time-series data either do not exploit temporal relationships in the sequence or do not provide any guarantees on detection. We propose using deviation from the in-distribution temporal equivariance as the non-conformity measure in a conformal anomaly detection framework for OOD detection in time-series data. This leads to the proposed detector CODiT, which comes with guarantees on false detection in time-series data. We illustrate the efficacy of CODiT by achieving state-of-the-art results on computer vision datasets in autonomous driving. We also show that CODiT can be used for OOD detection in non-vision datasets by performing experiments on the physiological GAIT sensory dataset. Code, data, and trained models are available at https://github.com/kaustubhsridhar/time-series-ood.
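The conformal machinery behind the false-detection guarantee can be illustrated with a minimal sketch. The non-conformity scores below are generic placeholders (CODiT's actual measure, deviation from in-distribution temporal equivariance, would be produced by a model trained on transformations of in-distribution windows); the bound on false detections follows from the rank-based p-value alone.

```python
import numpy as np

def conformal_p_value(score: float, calibration_scores: np.ndarray) -> float:
    """Rank-based p-value of a test non-conformity score against calibration scores
    computed on held-out in-distribution data (inductive conformal anomaly detection)."""
    n = len(calibration_scores)
    # Fraction of calibration scores at least as extreme as the test score.
    return (np.sum(calibration_scores >= score) + 1) / (n + 1)

def detect_ood(window_score: float, calibration_scores: np.ndarray, alpha: float = 0.05) -> bool:
    """Flag a time-series window as OOD when its p-value falls below alpha.
    For in-distribution windows exchangeable with the calibration set, the
    probability of a (false) detection is bounded by alpha."""
    return conformal_p_value(window_score, calibration_scores) < alpha

# Hypothetical usage: scores would come from a model measuring deviation from
# temporal equivariance on each window; the values here are placeholders.
cal = np.random.rand(1000)        # calibration non-conformity scores (in-distribution)
print(detect_ood(0.999, cal))     # a large deviation is likely flagged as OOD
```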
Adversarial training (AT) and its variants have made significant progress over the past few years in improving the robustness of neural networks to adversarial perturbations and common corruptions. The algorithm design of AT and its variants focuses on a specified perturbation strength $\epsilon$ and only uses feedback from the performance of that $\epsilon$-robust model to improve the algorithm. In this work, we focus on models trained over a spectrum of $\epsilon$ values. We analyze three perspectives: model performance, intermediate feature precision, and convolutional filter sensitivity. In each case, we identify alternative improvements to AT that would otherwise not be apparent at a single $\epsilon$. Specifically, we find that for a PGD attack at some strength $\delta$, there is a model trained at some slightly larger strength $\epsilon$, but no larger, that generalizes best to it. Hence, we propose overdesigning for robustness: we suggest training models at an $\epsilon$ slightly above $\delta$. Second, we observe (across a range of $\epsilon$ values) that robustness is highly sensitive to the precision of intermediate features, especially those after the first and second layers. Thus, we propose adding a simple quantization step to defenses, which improves accuracy against seen and unseen adaptive attacks. Third, we analyze the convolutional filters of each layer of models trained at increasing $\epsilon$ and note that the filters of the first and second layers may be solely responsible for amplifying input perturbations. We present our findings and demonstrate our techniques through experiments with ResNet and WideResNet models on the CIFAR-10 and CIFAR-10-C datasets.
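The quantization defence described above can be sketched as a simple activation-rounding module placed after the early convolutional layers; the bit-width and placement below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class ActivationQuantizer(nn.Module):
    """Round intermediate activations to a fixed number of levels.
    Coarse precision in early features can blunt small adversarial perturbations."""
    def __init__(self, levels: int = 16):
        super().__init__()
        self.levels = levels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        lo, hi = x.min(), x.max()
        scale = (hi - lo).clamp(min=1e-8) / (self.levels - 1)
        return torch.round((x - lo) / scale) * scale + lo

# Hypothetical placement: quantize after the first convolution block of a ResNet-style model.
block = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1),
    nn.ReLU(),
    ActivationQuantizer(levels=16),   # assumed number of levels; tune per layer
)
print(block(torch.randn(1, 3, 32, 32)).shape)
```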
Machine learning methods such as deep neural networks (DNNs), despite their success across different domains, are known to often produce incorrect predictions with high confidence on inputs outside the training distribution. The deployment of DNNs in safety-critical domains requires detecting out-of-distribution (OOD) data so that the DNN can abstain from making predictions on such inputs. A number of methods have recently been developed for OOD detection, but there is still room for improvement. We propose the new method iDECODe, which leverages in-distribution equivariance for conformal OOD detection. It relies on a novel base non-conformity measure and a new aggregation method, used within the inductive conformal anomaly detection framework, thereby guaranteeing a bounded false detection rate. We demonstrate the efficacy of iDECODe through experiments on image and audio datasets, obtaining state-of-the-art results. We also show that iDECODe can detect adversarial examples.
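One common way to aggregate several per-input conformal p-values (e.g., one per transformation of the input) into a single decision is Fisher's method, sketched below purely for illustration; whether this matches iDECODe's aggregation, and the independence conditions required for the guarantee, are not specified here.

```python
import numpy as np
from scipy.stats import chi2

def fisher_combine(p_values: np.ndarray) -> float:
    """Combine several p-values into one via Fisher's method;
    small combined values indicate an OOD input."""
    stat = -2.0 * np.sum(np.log(np.clip(p_values, 1e-12, 1.0)))
    return chi2.sf(stat, df=2 * len(p_values))

# Hypothetical usage: p-values for one test input under several equivariant
# transformations, each computed as in inductive conformal anomaly detection.
p_vals = np.array([0.30, 0.02, 0.11, 0.04])
print(fisher_combine(p_vals) < 0.05)   # flag as OOD if the combined p-value < alpha
```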
Research has shown that climate change creates warmer temperatures and drier conditions, leading to longer wildfire seasons and increased wildfire risks in the United States. These factors have in turn led to increases in the frequency, extent, and severity of wildfires in recent years. Given the danger posed by wildland fires to people, property, wildlife, and the environment, there is an urgency to provide tools for effective wildfire management. Early detection of wildfires is essential to minimizing potentially catastrophic destruction. In this paper, we present our work on integrating multiple data sources in SmokeyNet, a deep learning model using spatio-temporal information to detect smoke from wildland fires. Camera image data is integrated with weather sensor measurements and processed by SmokeyNet to create a multimodal wildland fire smoke detection system. We present our results comparing performance in terms of both accuracy and time-to-detection for multimodal data vs. a single data source. With a time-to-detection of only a few minutes, SmokeyNet can serve as an automated early notification system, providing a useful tool in the fight against destructive wildfires.
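As a rough illustration of multimodal fusion (not the actual SmokeyNet architecture, whose layers and dimensions are not given here), camera-derived features and weather sensor readings can be concatenated before a classification head:

```python
import torch
import torch.nn as nn

class MultimodalSmokeClassifier(nn.Module):
    """Toy fusion model: concatenate image features with weather sensor features.
    Dimensions and layers are illustrative assumptions, not SmokeyNet's design."""
    def __init__(self, img_feat_dim: int = 512, weather_dim: int = 8):
        super().__init__()
        self.weather_encoder = nn.Sequential(nn.Linear(weather_dim, 32), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(img_feat_dim + 32, 64), nn.ReLU(),
            nn.Linear(64, 1),                 # smoke / no-smoke logit
        )

    def forward(self, img_features, weather):
        fused = torch.cat([img_features, self.weather_encoder(weather)], dim=-1)
        return self.head(fused)

model = MultimodalSmokeClassifier()
logit = model(torch.randn(4, 512), torch.randn(4, 8))  # 4 camera frames + sensor readings
print(logit.shape)  # torch.Size([4, 1])
```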
With large-scale adoption of biometric-based applications, the security and privacy of biometrics are of utmost importance, especially when operating in unsupervised online mode. This work proposes a novel approach for generating new artificial fingerprints, also called proxy fingerprints, that are natural looking, non-invertible, revocable, and privacy preserving. These proxy biometrics can be generated from the original ones only with the help of a user-specific key. Instead of the original fingerprint, these proxy templates can be used anywhere with the same convenience. The manuscript walks through an interesting way in which proxy fingerprints of different types can be generated and how they can be combined with user-specific keys to provide revocability and cancelability in case of compromise. Using the proposed approach, a proxy dataset is generated from samples belonging to the Anguli fingerprint database. Matching experiments were performed on the new set, which is 5 times larger than the original, and it was found that its performance is on par, with 0 FAR and 0 FRR in the stolen-key and safe-key scenarios. Other parameters concerning revocability and diversity are also analyzed for protection performance.
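The revocability idea, that a compromised template can be discarded and re-issued simply by changing the key, can be illustrated with a generic key-seeded random projection. This is a common cancelable-biometrics construction used here only for illustration; it is not the paper's proxy fingerprint generation method.

```python
import numpy as np

def cancelable_template(feature_vec: np.ndarray, user_key: int, out_dim: int = 64) -> np.ndarray:
    """Project a biometric feature vector through a key-seeded random matrix.
    Issuing a new key yields a new, unlinkable template (revocability)."""
    rng = np.random.default_rng(user_key)
    projection = rng.standard_normal((out_dim, feature_vec.shape[0]))
    return np.sign(projection @ feature_vec)   # binarise for non-invertibility

features = np.random.rand(256)                 # placeholder fingerprint features
t_old = cancelable_template(features, user_key=1234)
t_new = cancelable_template(features, user_key=9876)   # revoked and re-issued key
print((t_old == t_new).mean())                 # templates under different keys differ
```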
Recently, online social media has become a primary source for new information as well as misinformation and rumours. In the absence of an automatic rumour detection system, the propagation of rumours has increased manifold, leading to serious societal damage. In this work, we propose a novel method for building an automatic rumour detection system that focuses on oversampling to alleviate the fundamental challenge of class imbalance in the rumour detection task. Our oversampling method relies on contextualised data augmentation to generate synthetic samples for underrepresented classes in the dataset. The key idea exploits the selection of tweets in a thread for augmentation, achieved by introducing a non-random selection criterion that focuses the augmentation process on relevant tweets. Furthermore, we propose two graph neural networks (GNNs) to model non-linear conversations in a thread. To enhance the tweet representations in our method, we employ a custom feature selection technique based on the state-of-the-art BERTweet model. Experiments on three publicly available datasets confirm that 1) our GNN models outperform the current state-of-the-art classifiers by more than 20% (F1-score); 2) our oversampling technique increases model performance by more than 9% (F1-score); 3) focusing on relevant tweets for data augmentation via a non-random selection criterion can further improve the results; and 4) our method has superior capabilities to detect rumours at a very early stage.
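A minimal sketch of contextualised augmentation for minority-class tweets is shown below: one word is masked and a contextual language model proposes a replacement. The model name (roberta-base) and the single-token masking scheme are illustrative assumptions; they stand in for the paper's BERTweet-based pipeline and non-random tweet selection.

```python
import random
from transformers import pipeline

# Stand-in masked language model; the paper builds on BERTweet, but the exact
# augmentation setup is not specified here.
fill_mask = pipeline("fill-mask", model="roberta-base")

def augment_tweet(text: str) -> str:
    """Create a synthetic sample by masking one word and letting a
    contextual language model fill it in."""
    tokens = text.split()
    idx = random.randrange(len(tokens))
    tokens[idx] = fill_mask.tokenizer.mask_token
    return fill_mask(" ".join(tokens), top_k=1)[0]["sequence"]

print(augment_tweet("Officials have not confirmed the reports about the explosion"))
```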
A number of competing hypotheses have been proposed to explain why small-batch Stochastic Gradient Descent (SGD) leads to improved generalization over the full-batch regime, with recent work crediting the implicit regularization of various quantities throughout training. However, to date, empirical evidence assessing the explanatory power of these hypotheses is lacking. In this paper, we conduct an extensive empirical evaluation, focusing on the ability of various theorized mechanisms to close the small-to-large batch generalization gap. Additionally, we characterize how the quantities that SGD has been claimed to (implicitly) regularize change over the course of training. By using micro-batches, i.e. disjoint smaller subsets of each mini-batch, we empirically show that explicitly penalizing the gradient norm or the Fisher Information Matrix trace, averaged over micro-batches, in the large-batch regime recovers small-batch SGD generalization, whereas Jacobian-based regularizations fail to do so. This generalization performance is shown to often be correlated with how well the regularized model's gradient norms resemble those of small-batch SGD. We additionally show that this behavior breaks down as the micro-batch size approaches the batch size. Finally, we note that in this line of inquiry, positive experimental findings on CIFAR10 are often reversed on other datasets like CIFAR100, highlighting the need to test hypotheses on a wider collection of datasets.
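A minimal sketch of the explicit gradient-norm penalty averaged over micro-batches is given below; it is illustrative of the regularizer described above under assumed hyperparameters, not the authors' exact training loop.

```python
import torch
import torch.nn as nn

def loss_with_grad_norm_penalty(model, x, y, micro_bs=32, lam=0.01):
    """Large-batch loss plus the gradient norm averaged over disjoint micro-batches,
    an explicit stand-in for small-batch SGD's implicit regularization."""
    criterion = nn.CrossEntropyLoss()
    total_loss, penalty, n_micro = 0.0, 0.0, 0
    for xb, yb in zip(x.split(micro_bs), y.split(micro_bs)):
        loss = criterion(model(xb), yb)
        # create_graph=True keeps the penalty differentiable for the outer backward pass.
        grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
        grad_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        total_loss = total_loss + loss
        penalty = penalty + grad_norm
        n_micro += 1
    return total_loss / n_micro + lam * penalty / n_micro

# Hypothetical usage with a toy model and a "large" batch of 256 examples.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
x, y = torch.randn(256, 3, 32, 32), torch.randint(0, 10, (256,))
loss = loss_with_grad_norm_penalty(model, x, y)
loss.backward()
```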
The compute-intensive nature of neural networks (NNs) limits their deployment in resource-constrained environments such as cell phones, drones, and autonomous robots. Hence, developing robust sparse models fit for safety-critical applications has been an issue of longstanding interest. Though adversarial training has been combined with model sparsification to attain this goal, conventional adversarial training approaches provide no formal guarantee that the models would be robust against any rogue samples in a restricted space around a benign sample. Recently proposed verified local robustness techniques provide such a guarantee. This is the first paper that combines the ideas of verified local robustness and dynamic sparse training to develop SparseVLR, a novel framework for searching for verified locally robust sparse networks. The obtained sparse models exhibit accuracy and robustness comparable to their dense counterparts at sparsity as high as 99%. Furthermore, unlike most conventional sparsification techniques, SparseVLR does not require a pre-trained dense model, reducing training time by 50%. We exhaustively investigate SparseVLR's efficacy and generalizability by evaluating it on various benchmark and application-specific datasets across several models.
In many real-world problems, the learning agent needs to learn a problem's abstractions and solution simultaneously. However, most such abstractions need to be designed and refined by hand for different problems and domains of application. This paper presents a novel top-down approach for constructing state abstractions while carrying out reinforcement learning. Starting with state variables and a simulator, it presents a novel domain-independent approach for dynamically computing an abstraction based on the dispersion of Q-values in abstract states as the agent continues acting and learning. Extensive empirical evaluation on multiple domains and problems shows that this approach automatically learns abstractions that are finely-tuned to the problem, yield powerful sample efficiency, and result in the RL agent significantly outperforming existing approaches.
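The core refinement signal can be sketched as follows: an abstract state is split when the Q-values of the concrete states grouped inside it disperse too much. The dispersion statistic, threshold, and refinement rule below are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def should_refine(q_values_in_abstract_state: np.ndarray, threshold: float = 0.5) -> bool:
    """Split an abstract state when the Q-values of the concrete states it groups
    together disagree too much, i.e. the abstraction hides a useful distinction."""
    return np.std(q_values_in_abstract_state) > threshold

# Hypothetical usage: Q-values observed for concrete states mapped to one abstract state.
q_vals = np.array([1.9, 2.1, 8.7, 2.0])     # one concrete state clearly behaves differently
if should_refine(q_vals):
    print("refine this abstract state on the most informative state variable")
```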
Healthcare is an important aspect of human life. The use of technology in healthcare has increased manifold after the pandemic. IoT-based systems and devices proposed in the literature can help elderly people, children, and adults facing or experiencing health problems. This paper exhaustively reviews 39 wearable-sensor-based datasets that can be used to evaluate systems for recognizing activities of daily living and falls. A comparative analysis of the SisFall dataset is performed using five machine learning methods, namely logistic regression, linear discriminant analysis, k-nearest neighbors, decision tree, and naive Bayes. The dataset is modified in two ways: first, all attributes present in the dataset are used, with labels in binary form; second, the magnitude of the three axes (x, y, z) is computed and then used as the attribute for labeling in the experiments. Experiments are performed on one subject, ten subjects, and all subjects, and compared in terms of accuracy, precision, and recall. The results obtained from this study show that KNN outperforms the other machine learning methods in terms of accuracy, precision, and recall. It is also concluded that data personalization improves accuracy.
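A minimal sketch of the second preprocessing variant (the accelerometer magnitude feature) and the KNN classifier that performed best is shown below; the random placeholder data, window handling, and labels are illustrative assumptions rather than the paper's exact pipeline on the SisFall recordings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

# Placeholder tri-axial accelerometer samples and binary labels (fall / no fall);
# in the study these would come from the wearable-sensor recordings.
xyz = np.random.randn(1000, 3)
labels = np.random.randint(0, 2, size=1000)

# Second labelling variant: replace the three axes with their magnitude.
magnitude = np.sqrt((xyz ** 2).sum(axis=1)).reshape(-1, 1)

X_train, X_test, y_train, y_test = train_test_split(magnitude, labels, test_size=0.3)
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print(knn.score(X_test, y_test))   # accuracy; precision and recall reported analogously
```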